Search Results: "tincho"

20 March 2015

Zlatan Todorić: My journey into Debian

Notice: There were several requests for me to elaborate more on my path to Debian and its impact on my life, so here it is. It's going to be a bit long, so anyone who isn't interested in my personal Debian journey should skip it. :)

In 2007 I enrolled in the Faculty of Mechanical Engineering (at first in the Department of Industrial Management, later transferring to the Department of Mechatronics - this was possible because the first 3 semesters are the same for both departments). By the end of that year I was finishing my tasks (consisting primarily of calculations, some small graphical designs and write-ups) when a famous virus, called "RECYCLER" by users, sent my Windows XP machine into oblivion. Not only did it take control of the machine and spawn so many processes that the system would crash itself, it actually deleted everything from the hard disk before it killed the system entirely. I raged - a month of my work, full of precise calculations and a lot of design details, was just gone. I started cursing, which as always ended in weeping: "Why isn't there an OS that can withstand all these viruses, even if it looks like old DOS!?" At that time, my roommate was my cousin, who had used Kubuntu in the past and currently had SUSE dual-booted on his laptop. He called me over and started talking about this thing called Linux, how it's different but de facto has no viruses. Well, show me this Linux - my thought was that it's probably so ancient and little used that it looks like something from the pre Windows 3.1 era, but when SUSE booted up it had a much more beautiful UI (it was KDE, and compared to XP it looked like the most professional OS ever). So I was thrilled, installed openSUSE, found some rough edges (I knew immediately that my work with professional CAD systems would not be possible on Linux machines), but overall I was sold. After that he even talked to me about distros. Wait, WTF, distros?! So he showed me distrowatch.com. I was amazed. There was not only a better OS than Windows - there were dozens, hundreds of them. After some poking around I installed Debian KDE - and it felt great, working better than openSUSE, but now I was, like most newbies, on fire to try more distros. So I went around with Fedora, Mandriva, CentOS, Ubuntu, Mint and PCLinuxOS, and at the beginning of 2008 I stumbled upon Debian docs which talked about GNU and the GNU Manifesto. To be clear, as a high-school kid I was always very much attached to the idea of freedom, but had started losing faith by faculty time (the Internet was still not taking up too much of our time here; youth still spent most of the day outside). So the GNU Manifesto was really a big thing for me, and Debian is a social bastion of freedom. Debian (now with GNOME2) was installed on my machine. With all that hackerdom around Debian, I started trying to dig into some code. I have never read a book on coding (to this day I still haven't started and finished one), so after a few days I decided to code Tetris in C++, thinking I would finish it in two days at most (the feeling that you are a powerful and very bright person) - I finished it after one month, in much pain. So instead I learned about keeping a Debian system going, and explored some new packages. I got thrilled over radiotray and slimvolley (I even held a tournament in my dorm room), started helping on #debian, was very active in conversations with others about Debian, and even installed it on a few laptops (I became the de facto technical support for the users of those laptops :D).
Then came 2010, which, together with the negative flow that started in the second half of 2009, began to crush me badly. I had been promised a trip to Norway to continue my studies in robotics, and the professor lied (that same professor is still at the faculty, even after he was caught in a big corruption scandal over buying robots - he bought 15-year-old robots from the UK, although he got money from Norway to buy new ones). My relationship came to a hard end and had a big emotional impact on me. I failed a year at the faculty. My father stopped financing me and stopped talking to me. My depression came back. Alcohol took over me. I was drunk every day, just not to feel anything. Then, at the end of 2010, I somehow came across the information that DebConf would be in Banja Luka. WHAT?! DebConf in the city where I live. I got onto #debconf, and in December 2010/January 2011 I became part of the famous "local local organizers". I was still getting hammered by alcohol, but at least I was coming out of depression. IIRC I met Holger and Moray in May and had a great day (a drop of rakia that was too much for all of us), and there was something strange about the way they behaved. Beautiful, but strange. Both were radiating a unique energy of liberty, although I am not sure they were aware of it. Later, during DebConf, I felt that energy from almost all Debian people, which I can't explain. I don't feel it today - not because it's not there, but because I think I have integrated so much into the Debian community that it's now a natural feeling; people here who are close to me say they feel it when I talk about Debian. DebConf time in Banja Luka was awesome - I first met Phil Hands and Andrew McMillan, who were a crazy team; the local local team was working hard (I even threw up while working in Banski Dvor, from all the heat and probably not much sleep due to the excitement); I also met the crazy Mexican Gunnar (aren't all Mexicans crazy?), played Mao (never again, thank you), and hung around smart but crazy people (love you all), among whom I must mention Nattie (a bastion of positive energy), Christian Perrier (who coordinated our Serbian translation effort), Steve Langasek (who asked me to find a physiotherapist for his co-worker Matthias Klose, IIRC), Zack (not at all an important guy at that time), Luca Capello (who gifted me a swirl on my birthday) and so many others that just naming them would be a post in itself. During DebConf there were also some hard times - my grandfather died on 6 July and I couldn't attend the funeral, so I still carried that sadness in my heart, and Darjan Prtić, a local team member who came from Vienna, committed suicide on my birthday (23 July). But DebConf as a conference was great, and more importantly, the Debian community felt like a family - and Meike Reichle told me that it was one. The night it finished, Vedran Novaković and I cried. A lot. Even days after, I was getting up in the morning feeling I needed to do something for DebConf. After a long time, I felt alive. By the end of the year, I had adopted a package from Clint Adams, and Moray became my sponsor. In the last quarter of 2011 and the beginning of 2012, I (as part of the LUG) held talks about Linux, ran the first ever Linux installation in the Computer Center, and installed Debian on more machines. Now fast-forwarding, with some details - I was also at DebConf13 in Switzerland, where I met some great new friends such as Tincho and Santiago (and many many more); Santiago was also my roommate at the previous DebConf, in Portland. In Switzerland I had a really great and awesome time.
Year 2014 - I was also at DebConf14; I now maintain a few more packages and have applied to become a DD. I met some new friends, among whom I must single out Apollon Oikonomopoulos and Costas Drogos, whose friendship is already deep for such a short time, and I already know they are life-long friends. Also thanks to Steve Langasek, because without his help I wouldn't have been in Portland with my family, and he also gave me an Arduino. :) 2015 - I am currently at my village residence; I have 5 years of working experience as a developer thanks to Debian, and still a lot to go, learn and do, but my love for the Debian community is orders of magnitude bigger than when I thought I loved it the most. I am also going through my personal evolution, and people from Debian showed me to fight for what you care about, so I plan to do so. I can't write everything or name all the people I have met, but believe me when I say that I remember most of you, and all of you impacted my life, for which I am eternally grateful. Debian and its community literally saved my life, sprang new energy into me and changed me for the better. Debian's social impact is far bigger than its technical one - and when you know that Debian is a bastion of technical excellence, you can maybe picture the greatness of Debian. Some of the greatest minds are in Debian, but the most important thing isn't the sheer amount of knowledge; it is the enormous empathy. I just hope that in the future I can show more people what Debian is, and find all the lost souls like me, to give them hope, to show them that we can make the world a better place and that everyone is capable of living and doing what they love. P.S. To this day I am still hoping and waiting to see Bdale write a book about Debian's history - in which I think many of us would admire the work done by project members, laugh about many situations, and have fun reading about a project that had every reason to fail, and yet stands stronger than ever, with roots deep in our minds.

26 September 2014

Holger Levsen: 20140925-reproducible-builds

Reproducible builds? I've never done any - manually, that is. :) But what I've done now is set up reproducible builds on jenkins.debian.net, which will build hundreds or thousands of packages, hopefully reproducibly, regularly in the future. Thanks to Lunar's and many other people's work, this was actually rather easy. If you want to do this manually, it should take you just a few minutes to set up a suitable build environment. So three days ago, when I wasn't exactly bored, I decided that it was a good moment to implement some reproducible build jobs on jenkins.d.n, so I gave it a try, and two hours later the basic implementation was working; then it was an evening and a morning of fine-tuning until I was mostly satisfied. Since then there has been some polishing, but the basic setup is done and has been working since. What's the result? One job, reproducible_setup, will just create a suitable environment for pbuilding reproducible packages, as documented so well on the Debian wiki. And as that job only runs for 3.5 minutes (to debootstrap from scratch), it's run daily. And then there are currently 16 other jobs, which test reproducible builds in different areas: d-i, core, some six major desktops and some selected desktop applications, some security + privacy related packages, some build chains we have in Debian, libreoffice and X.org. Most of these jobs run for several hours, but luckily not days. And they discover packages which still fail to build reproducibly, which has already caused some bugs to be filed, eg. #762732 "libdebian-installer: please do not write timestamps in Doxygen generated documentation". So this is the output from testing the reproducibility of all debian-installer packages: 72 packages were successfully built reproducibly, while 6 packages failed to do so. I was quite impressed by these numbers, as AFAIK no one had tried to build d-i reproducibly before.
72 packages successfully built reproducibly: userdevfs user-setup usb-discover udpkg tzsetup rootskel rootskel-gtk rescue preseed pkgsel partman-xfs partman-target partman-partitioning partman-nbd partman-multipath partman-md partman-lvm partman-jfs partman-iscsi partman-ext3 partman-efi partman-crypto partman-btrfs partman-basicmethods partman-basicfilesystems partman-base partman-auto partman-auto-raid partman-auto-lvm partman-auto-crypto partconf os-prober oldsys-preseed nobootloader network-console netcfg net-retriever mountmedia mklibs media-retriever mdcfg main-menu lvmcfg lowmem localechooser live-installer lilo-installer kickseed kernel-wedge kbd-chooser iso-scan installation-report installation-locale hw-detect grub-installer finish-install efi-reader dh-di debian-installer-utils debian-installer-netboot-images debian-installer-launcher clock-setup choose-mirror cdrom-retriever cdrom-detect cdrom-checker cdebconf-terminal cdebconf-entropy bterm-unifont base-installer apt-setup anna 
6 packages failed to build reproducibly: win32-loader libdebian-installer debootstrap console-setup cdebconf busybox 
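For the curious, testing a single package by hand follows the same pattern the jobs automate: build it twice in a clean sid chroot, then compare the two results (debbindiff is Lunar's tool for that). A rough sketch only - the package name and paths are just examples, and the wiki documents the full setup, including the modified toolchain packages many builds still need:

$ sudo pbuilder create --distribution sid --basetgz /var/cache/pbuilder/repro.tgz
$ sudo pbuilder build --basetgz /var/cache/pbuilder/repro.tgz --buildresult b1 hello_2.9-1.dsc
$ sudo pbuilder build --basetgz /var/cache/pbuilder/repro.tgz --buildresult b2 hello_2.9-1.dsc
$ debbindiff b1/hello_2.9-1_amd64.deb b2/hello_2.9-1_amd64.deb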
What's also impressive: all packages for the newly introduced Cinnamon desktop build reproducibly from the start! The jenkins setup is configured via just three small files - that's it, and that's enough to keep several cores busy for days. :-) But as each job only takes a few hours, each is scheduled twice a month, and more jobs and packages shall be added in the future (with some heuristics to schedule known-good packages less often...). I guess it's an appropriate opportunity to say "many thanks to Profitbricks", who have been donating the powerful virtual machine jenkins.debian.net is running on since October 2012. I also want to say "many many thanks to Helmut" (Grohne), who has recently joined me in maintaining this jenkins setup. And then I'd like to thank "the KGB trio" (Gregor, Tincho and Dam!) for providing those KGB bots on IRC, which are very helpful for providing notifications on IRC channels, and last but not least, thanks to everybody who contributed so that reproducible builds got this far! Keep up the jolly good work! And if you happen to know failing packages not included in job-cfg/reproducible.yaml, I'd like to hear about them, so they'll get regularly tested and appear on the radar, until finally bugs are filed, fixed and migrated to stable. So one day all binary packages in Debian stable will be built reproducibly. An important step on this road is probably to have this defined as a release goal for jessie+1. And then for jessie+1, hopefully the first 10k packages will build reproducibly? Or a whopping 23k maybe? ;-) And maybe release jessie+2 with 100%?!? We will see! Even jessie already has quite some packages (someone needs to count them...) which build reproducibly with just modified dpkg(-dev) and debhelper packages alone... So let's fix all the bugs! That said, an easier start for most of you is probably the list of useful things you (yes, you!) can do! :-) Oh, and last but surely not least in my book: many thanks, too, to the nice people hosting me so friendly in the last days! Keep on rockin'!

22 May 2014

Martín Ferrari: Yakker, part 4: the client application

This is the fourth post in a series of posts (part 1, part 2, part 3) describing a secure alternative to applications like WhatsApp. I started with the following statement:
I believe it is possible to build a system with the simplicity and functionality of WhatsApp or Viber, which provides end-to-end encryption, is built on free software and open protocols, that supports federation and is almost decentralised, and that would allow interested companies to turn a profit without compromising any of these principles.
Now that most of the infrastructure has been described, in this post I will talk about the user-visible part: a mobile SIP client specially tailored for this architecture.

Features For Yakker to be successful, an application that is visually attractive and simple to use, while providing excellent call quality and stability, is critical. This idea is useless if only geeks adopt it: I want my parents to use it, I want my non-techie friends to use it. I want to tell them to switch from any of the other applications, not only because it is more secure, open and community-based, but also because it is better. For Android, there are already some excellent free applications that could be used as a starting point: Lumicall, CSipSimple and Linphone. I haven't tried it yet, but Linphone has a port to iOS too. Apart from the quality considerations, the following features must be added:
  • Certificate creation and proper storage in the mobile device.
  • An account creation wizard that interacts with the directory service and the account providers. Lumicall already does something like this, but only for its own ENUM and SIP server.
  • Proper DNSSEC validating resolver to securely get the callee's certificate and ENUM record.
  • Optionally, use the directory service API to query records instead of DNS, to enhance privacy.
  • Periodic verification of the user's own ENUM and certificate records in DNS.
  • Use of those certificates to set up the SRTP stream, instead of unauthenticated encryption. Text messages must be encrypted in the same way, but included in the SIP message. SIPS encryption is not enough, as the proxies and service providers can read them.
  • Integration with the system's address book.
  • ENUM lookup, and SIP SRV lookup, to detect phones with an associated SIP account (see the lookup sketch after this list).
  • When the called party does not have a SIP account, the client must offer to call using the PSTN gateway at the service provider, if it provides one, or using the GSM network. It needs to be fast and reliable, so people can use it as the default texting and calling application.
  • Provide an interface to query the account's balance and calling rates, and to buy credit. Possibly, offer this by opening a web browser, but negotiating authentication first, so the user does not need to enter a user name or password.
  • Capabilities to migrate to a different service provider.
  • A secure way to share the key pair with another trusted device.
  • A way to import a key pair created by the user manually.
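To make the lookups above concrete, here is roughly what the client would ask the DNS. The phone number is invented, and a real deployment could use e164.arpa or a private ENUM tree run by the directory service:

# ENUM: the E.164 number +1 555 123 4567, reversed one digit per label
$ dig +dnssec NAPTR 7.6.5.4.3.2.1.5.5.5.1.e164.arpa
# SRV: locate the SIP service for the domain the NAPTR record points to
$ dig +dnssec SRV _sip._udp.provider.example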
I am probably missing a bunch of other capabilities that need to be implemented. It is a lot of work. Some of these features would only be useful to participants of the Yakker network, but others could be useful to every SIP user, and could therefore be implemented first. For example, SIP accounts that are not associated with a phone number could publish certificates under the same domain, and have the client use them to have secure communications with legacy infrastructure.

Funding I am aware that this amount of work is not going to happen overnight. In fact, without a bunch of people from the community interested in the project, it is never going to happen. The good news is that I think companies might be interested in investing in this. A company could create its own branded version of the client, and use advertisements or service-provider preference (the provider the user gets unless they choose one manually) to generate income. As long as the code remains free, and the client is compatible with the whole system, there could be many competing clients out there. Their own promotion schemes would work for the benefit of the whole system, by bringing in more users.

Risks The biggest threat would be one client monopolising the network, and then changing the protocols to make the system a walled garden. It has already happened with GTalk, so there is precedent for this. I don't think there is any way to stop a big company from doing this, but one strategy to mitigate the risk a bit would be to create a brand (Yakker, or whatever this ends up being called) and have it managed by a trusted community organisation, which can revoke the right to use the brand when a party is not behaving.

19 May 2014

Martín Ferrari: Yakker, part 3: the service providers

This is the third post in a series of posts (part 1, part 2) describing a secure alternative to applications like WhatsApp. I started with the following statement:
I believe it is possible to build a system with the simplicity and functionality of WhatsApp or Viber, which provides end-to-end encryption, is built on free software and open protocols, that supports federation and is almost decentralised, and that would allow interested companies to turn a profit without compromising any of these principles.
In this post, I will discuss the SIP service providers, how to enable them to make a profit, while keeping the network open and secure.

Basics Service providers are expected to be third parties, either for-profit or not. PSTN termination would be an optional feature, possibly only offered by for-profit services. The service would consist of just a standard SIP service, with STUN/ICE for NAT traversal, plus an API for getting PSTN termination rates and credit balance, for account creation, and for buying credit. All authentication will be performed with public-key cryptography, using the data already available in the directory service. The idea is that companies can make a profit by selling PSTN termination and SMS sending. This way, users get good rates for calling/texting people who are not part of the network, and free SIP-to-SIP calls don't impose a big load on the service. It is expected that providers would offer some free calls, to allow users to test the service; public providers like Betamax already offer free calls to land lines in many countries. To make this simple for the users, the client application would use the API to provide a unified interface to the service providers, without the need for passwords or a web browser. To support multiple services, the overseeing authority must be in charge of compiling a list of authorised providers, which the client application would then use when creating an account. Provider selection can be made automatically, depending on commercial agreements or user location; for power users, the option to manually choose a provider should be offered. To keep the providers in check, they should not be able to control the directory servers used, and the client application should be careful to expose as little private data as possible to them.
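The posts do not pin down this API, so purely as an illustration of the idea, the administrative calls could look something like the following, with the client authenticating through a TLS client certificate tied to its published key instead of a password. Every endpoint name here is hypothetical:

$ curl --cert client.pem --key client.key 'https://provider.example/api/rates?prefix=%2B353'
$ curl --cert client.pem --key client.key 'https://provider.example/api/balance'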

Account creation With the components described so far, we can already sketch how account creation and first calls would be handled:
  1. The client application creates a public/private key pair.
  2. It requests the directory service to create an account for the public key and the phone number.
  3. The directory service uses an encrypted SMS challenge or other similar mechanism to authenticate the request.
  4. The public key is published, associated with the phone number.
  5. The client app gets a list of service providers, which includes configuration parameters and PSTN rates, and offers the user the chance to select one, or it is automatically selected.
  6. The client app then requests an account creation using the phone number as identification, while authenticating with the published key.
  7. The provider uses DNSSEC to validate the request, and an account is created.
  8. The client then creates an ENUM record in the directory service, associating the phone number with the SIP account (see the record sketch after this list).
  9. The client registers with the SIP service, again using the public key to authenticate, and can start making calls right away.
  10. When the user wants to make a non-free call, the application can offer to buy credit, or use the GSM network.
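As a sketch of what step 8 publishes (the number and SIP URI are invented, and the zone could equally be a service-specific tree instead of e164.arpa), the ENUM entry is a standard NAPTR record:

$ dig +short NAPTR 7.6.5.4.3.2.1.5.5.5.1.e164.arpa
100 10 "u" "E2U+sip" "!^.*$!sip:+15551234567@provider.example!" .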

How is this different As you can see, there is not much difference from the current status quo. In fact, I would expect that traditional VoIP providers could become part of this network without much additional cost. The big differences are:
  • Authentication is delegated to the directory service.
  • The provider API, which handles all the administrative tasks usually offered by web applications.
  • Account creation is 100% automated and immediate.
  • The provider must offer federated SIP service, and use encryption for all SIP transactions.
  • PSTN termination would ideally accept encrypted RTP streams, but that's probably too much to ask.

What's next In the next post I will describe what's possibly the most challenging part of this project: the client application.

14 May 2014

Martín Ferrari: Yakker, part 2: the directory service

This is the second post in a series of posts describing a secure alternative to applications like WhatsApp. I started with the following statement:
I believe it is possible to build a system with the simplicity and functionality of WhatsApp or Viber, which provides end-to-end encryption, is built on free software and open protocols, that supports federation and is almost decentralised, and that would allow interested companies to turn a profit without compromising any of these principles.
In this post, I will outline the concepts behind the most critical component of the architecture: the directory service.

Outline I say this is the most critical component, because here lies what makes this architecture different, but also because this is the weakest link in the whole idea. I expect criticism, especially on some security trade-offs, and I hope that people who know better than me can help me improve it. The directory service is what allows users to register easily, without using passwords, to receive calls and messages from other users, even ones who are not part of the network, and to do all this with a reasonable security model.

Let's get technical The directory service is basically a DNSSEC-protected DNS zone serving ENUM records, along with public keys associated with each user identifier. A TLS-enabled API will allow account creation and validation, DNS record publishing, and encrypted record querying. Applications not supporting that API use standard ENUM querying, and the user manually uses traditional web-based methods for account and record management. This allows interoperability with any existing clients and SIP services. The service will authenticate users by the usual method of proving ownership of a phone number: sending an encrypted SMS with a secret that the client application then uses to claim the phone number. The same method can be used to validate ownership of identifiers that are not phone numbers, like pre-existing SIP or email addresses. Once the user is authenticated, the directory service will publish the user's public key and SIP address, both associated with the phone number. When a user wants to place a call or send a message, the client uses a DNSSEC-enabled resolver to securely get the other party's public key and SIP address. The user can also perform bulk look-ups, to discover which people in their address book are already in the system. These operations disclose a significant amount of private data, and I don't think this can be mitigated in an acceptable manner; therefore the directory service needs to forget the queries as soon as possible, and not store any logs of them. Also, DNS queries are not encrypted, and are thus vulnerable to snooping by third parties. To mitigate this, the service needs to implement an encrypted but anonymous API to perform queries, and thus offer extra privacy to clients that support the API. The idea is to have more than one service running on different domain names, but not many: they need to keep consistency and replicate among themselves, and the security and privacy implications of one service not being properly implemented or administered are too big. Therefore, there must be no more than a handful of these; they need to be properly audited, and must not be operated by any for-profit organisation.
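As an aside on what a "DNSSEC-enabled resolver" buys you in practice: a validating resolver sets the "ad" (authenticated data) flag on answers it could verify, and that flag is what a client would check before trusting a published key. For example, against a local validating resolver (output abridged and illustrative):

$ dig +dnssec SOA debian.org @127.0.0.1 | grep flags
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
# "ad" present means the whole chain of trust validated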

Security The Web of Trust is hard. End users don't like hard. Traditional PKI is prohibitively costly, and broken. OTR is good, but still not hassle-free. Using one of the proposed extensions to DANE, we can solve this problem: with DNSSEC, we can have each user's public key published where everybody can retrieve it securely. This published key can then be used for end-to-end encryption and client authentication with all the components of this architecture. Who creates the key pair? In the simple case, the client application creates the key pair, and stores it in an appropriate container in the mobile device. It then chooses one of the many cooperating directory services, and gets the public key published and associated with the phone number. If the private key is lost (lost your phone?), a new pair is generated and published, after passing the same checks of number ownership, and the old keys are discarded. This opens the door to some attacks, but those can be mitigated by having the client application verify the public records periodically, and the service requiring extra checks for key replacement - for example, by also using a challenge sent by email. What if I want to do things my way? Perfect: you create your keys, and then use the same mechanisms to register with the identity service. Also, you use the service to tell other users to connect to your own SIP server when they want to talk to your phone number. Does this sound like ENUM? It is ENUM, but better. Just by adding DNSSEC, a unified API, and the capability of publishing key material along with the routing information, you get yourself a reasonably secure way of distributing keys and locating users.

Interaction with other components Once the user records are published, the SIP server can use the public key to authenticate the user, which removes the need for passwords. The same principle applies to account creation: if the user has published key material under a phone number's record, the SIP server must accept account creation requests for the same phone number, provided they are authenticated with the private key. If the user wants to switch providers, it is as simple as creating a new account with the new provider, and then updating the ENUM record. The client application queries the service (and possibly other ENUM providers) before placing a call or sending a message. If the peer has keys published, the client can refuse to communicate if the keys don't match, or if the peer is not offering call encryption. When no public keys are found, the client can downgrade to traditional unauthenticated encryption, or to unencrypted communications.

To be continued In the next posts, I will talk about the overseeing organisation, the SIP service providers, and the client application, and how they all fit together. Stay tuned!

Martín Ferrari: Introducing Yakker: an open, secure and distributed alternative to WhatsApp

Did I get your attention with the title? Good. In this post I will outline something that I have been thinking about for some months:
I believe it is possible to build a system with the simplicity and functionality of WhatsApp or Viber, which provides end-to-end encryption, is built on free software and open protocols, that supports federation and is almost decentralised, and that would allow interested companies to turn a profit without compromising any of these principles.

Introduction This is the first post of a series that I will be publishing in the next few days. Many parts of these posts will be technical, but I expect that the main concepts can be understood by a wider audience. What I am proposing seems like a bold statement, I know. Maybe there is some fatal flaw somewhere in my thinking, and that is why I am publishing this: I hope to get constructive feedback, and maybe get enough traction to start implementing it Real Soon Now. I have been thinking about this problem since February, when I discussed it extensively with friends at FOSDEM. I have already published a critique of Telegram, which had way more impact than I ever imagined, showing that there are people out there interested in this kind of stuff. The last posts about DNSSEC and DANE were part of my musings about this, too. There are many components that need to be built for this to happen. But more importantly, this can only be useful if it gains a critical mass. And that's why I think making this a viable business tool is very important. At the same time, that means I need to think extra carefully to make it impossible for any for-profit company to mutate this effort into Just Another Walled Garden. My goals for this architecture are:
  • First and foremost, target the same people who nowadays are using a plethora of walled gardens for their instant communication needs. That is: WhatsApp, Viber, Skype, Facebook Messenger, etc.
  • It must focus on mobile, as that is what people care about, without forgetting other use cases.
  • Creating an account and placing the first call/text message should be as easy as it is currently with the competition.
  • All communication must be encrypted end-to-end with public-key cryptography; nobody but the user has access to the private keys.
  • Most components must be decentralised, and allow for competition.
  • There should be as little trust as possible placed on any part of the system.
  • Anybody can set up a compatible service provider and offer it to their users, while having full interoperability with other providers.
  • Compatible services which are not part of the network must be able to interoperate.
  • Contacting a person should work even if they are not subscribed to the service. The client application must fall back seamlessly to using interoperability gateways, PSTN termination, or the mobile network.
  • Interoperability with the competition is desirable, but possibly better left to be implemented by the client applications.

Components I have identified a few components needed for this to work; I will expand on each one later.
  1. A flagship mobile application for Android and iOS, based on Lumicall or CSipSimple, but with several important modifications.
  2. One or many directory and authentication services, based on ENUM and DNSSEC. These are the most critical piece of this idea, and possibly must only be operated by community-governed non-profits.
  3. One or many service providers, that offer simple account creation, registration, and optionally PSTN termination (which can be the main way of generating profit). An API needs to be defined for operations that are not part of the communications protocol, like account creation, credit purchasing, and balance querying.
  4. A network governing charter, and a trusted non-profit organisation that oversees that any participating parties are following the charter. This organisation defines which directory services are to be trusted (and possibly operates one of them), and which service providers the client application can use to create accounts.

Key points Some of the issues that need to be solved are:
  • How to handle and distribute public keys securely without the user understanding anything about security.
  • How to make registration painless and password-free, while offering an acceptable level of security.
  • How to fund development of the client application, and maintenance of the directory services.
  • How to get companies interested in this, so they would bring users to the network.
  • How to allow the user to migrate from one service provider to another, to improve competition.
  • How to prevent any party from subverting the spirit of the network.
  • How to make the client application work everywhere and have reasonable quality.

To be continued I think I have answers to most of these problems. I will elaborate in the next few days, stay tuned! :-)

Footnotes The name is something I chose in less than 2 minutes, while starting this post, so it is probably awful. The "distributed" part is only half true, as the directory services need to be centralised, but I think it's good enough. I am aware that Lumicall seems to be trying to build something similar. I only found out about that recently, when I was already thinking about this design. Sadly, I think it has several shortcomings, but it is definitely one of the building blocks of this project.

25 April 2014

Martín Ferrari: More DNSSEC

After quite a few hours of work, I have finally switched completely to DNSSEC: both client-side on my notebook, and on my personal tincho.org domain. The client side was pretty easy, although something broke in dnsmasq; I had no patience to debug it, so I just replaced it with a stock bind9 install, which is DNSSEC-enabled by default nowadays! To complement that, I added a plugin to Firefox/Iceweasel (WNPP bug #672845 pending, downloadable from the Czech NIC) that shows me with a nice icon whether the DNS is secure or not (and in the newest versions, it also shows DANE status, yay!). So, basically, if you want to have DNSSEC support on your computer, just install unbound or bind9 (or maybe dnsmasq, if you don't hit the same bug as me); it is really easy to have it up and running in no time. To test that it is working correctly, apart from that nifty plugin, you can visit this funny web page from Verisign Labs. On the server side, it was trickier. It involves quite a few steps, and the default tools from the dnssec-tools package are pretty buggy. But it was not too bad. After moving my domain from the registrar's DNS to my own server, configuring secondaries, etc., I went ahead with the DNSSEC configuration. I used this pretty good DNSSEC howto, which made the process a lot easier. After having my DNS server ready, I added the DS records at my registrar, and voilà, tincho.org is now protected by DNSSEC! There are a few web services to test your deployment: a simple one, and a more complete one with GraphViz diagrams! I felt so bold with all this that I went ahead and created DANE and SSHFP records for my services (and had to debug issues with SSH, because the old ssh-keygen tool would not create ECDSA records). I even set Postfix to use DANE to connect to remote hosts. Let's see how many things break in the following days!
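For reference, the server-side steps boil down to generating two key pairs, signing the zone, and handing the resulting DS record to the registrar. A rough sketch with the bind9 tools (key sizes, algorithm and file names are illustrative; the howto covers the details):

$ dnssec-keygen -a RSASHA256 -b 1024 -n ZONE tincho.org
$ dnssec-keygen -a RSASHA256 -b 2048 -n ZONE -f KSK tincho.org
$ cat Ktincho.org.+008+*.key >> tincho.org.zone
$ dnssec-signzone -o tincho.org tincho.org.zone
# serve the resulting tincho.org.zone.signed, and give the registrar
# the DS record written to dsset-tincho.org.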

22 April 2014

Martín Ferrari: DNSSEC, DANE, SSHFP, etc

While researching some security-related stuff for a post I am currently writing, I found some interesting bits here and there that I thought I should share, as they were new to me, and probably are to many others.

DNSSEC The first thing is DNSSEC. I knew about it, of course, but never bothered to dig much into it. While reading about many interesting applications of DNS for key distribution, and thinking of ways to use them, it became clear that DNSSEC is a precondition for any of that to work. In case you don't know about it, it is an extension to the DNS service to make it safer; for example, to prevent the bad guys from making you think that google.com points to sniffer.nsa.gov. Apart from these über-cool applications I was thinking about, avoiding DNS-based attacks becomes more and more relevant these days. And I think Debian and the rest of the Free Software world should work on making this available to all end-users as easily as possible. While adoption still looks pretty low, there is some good news. First, Google claims its public DNS supports DNSSEC. Of course, you need to trust Google's servers, and the path between your machine and them. But if your resolver supports DNSSEC, you can use their servers and validate the answers. On the other hand, I am not too sure about their implementation, as half of the time it would return a valid answer to a query for an invalid record (try dig +dnssec sigfail.verteiltesysteme.net @8.8.8.8). Also, they have not published DNSSEC records for google.com, which seems crazy. Some packages included in Debian already take advantage of DNSSEC if it is available (more on that later), but more importantly, there are a couple of DNSSEC-enabled recursive servers, including bind, unbound, and the more commonly used dnsmasq (there is a wiki page summarising Debian's status). Sadly, the default configuration for dnsmasq does not enable DNSSEC, and most people will not use it even if it is installed, because DHCP-provided servers are usually preferred. It seems to me that it would be wise to have a package that would install dnsmasq with DNSSEC enabled, and make it the only valid resolver for the system. If you want to check whether your resolver is correctly validating DNSSEC, you can use this test web page. More good news is that many top-level domains already support DNSSEC, and in my case, Gandi.net has support in place to set it up. So I am going to look into enabling it for my own domain.
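For what it's worth, recent dnsmasq (2.69 and later) can validate with just a few lines of configuration. A sketch, where the trust-anchor line is the root zone's KSK DS record current at the time of writing (double-check it against IANA before relying on it):

# in /etc/dnsmasq.conf:
dnssec
dnssec-check-unsigned
trust-anchor=.,19036,8,2,49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5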

SSHFP One useful and simple advantage of using DNSSEC is that you can store information there, and then trust it to be correct. One DNS RR (resource record) that is useful in this context is the SSHFP RR, which allows the sysadmin of a host to publish the host's SSH key fingerprint in the DNS zone. The ssh client, when the VerifyHostKeyDNS option is enabled, will use that information to trust unknown hosts. One downside is that whether you set the option to ask, or your resolver does not support DNSSEC, you get the same message, which does not warn you about the extra risk. To help you create your DNS records, you can just run this command:
$ ssh-keygen -r brie.tincho.org
brie.tincho.org IN SSHFP 1 1 6ac93c63379828b5b75847bc37d8ab2b48983343
brie.tincho.org IN SSHFP 2 1 cf0d11515367e3aa7eeb37056688f11b53c8ef23
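On the client side, the records are only honoured once the option is enabled. This goes in ~/.ssh/config (or /etc/ssh/ssh_config), where "yes" silently trusts validated records and "ask" always prompts:

Host *
    VerifyHostKeyDNS yes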

DANE, S/MIME and GPG Recently, while at FOSDEM, I attended talks that mentioned DANE. This proposed IETF standard introduces a mechanism to use DNS as a secure key distribution system, which could completely override the CA oligopoly, a very attractive proposition for many people. In short, it is very similar to the SSHFP mechanism, but it is not restricted to SSH host keys: it can be used to distribute public key information for any TLS-enabled service. So, instead of (or in addition to) having a CA sign your certificate, and relying on the chain of trust by means of having a local copy of all root CA certificates, you use the chain of trust embedded in DNSSEC to make sure that the DNS RRs you publish are valid. Then, the client application can trust the fingerprint published for the relevant service to verify that it is talking to the right server. This is a very exciting development, and I hope it gets widespread adoption. It is already supported in Postfix, and there seems to be some work going on in Mozilla, as well as in Prosody, which is a great start. Another exciting development is the generalisation of DANE to other entities, like email addresses. There are two draft RFCs being worked on right now to deploy S/MIME and OpenPGP key material using DNSSEC. This could also completely change the way we manage the Web of Trust.
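DANE works through TLSA records, whose owner name encodes the port and protocol of the pinned service. A sketch of checking one (the digest below is invented; the leading "3 0 1" means: match the end-entity certificate itself, the full certificate, by SHA-256 digest):

$ dig +dnssec +short TLSA _443._tcp.tincho.org
3 0 1 E2ABDE240D7CD3EE6B4B28C54DF034B97983A1D16E8A410E4561CB106618E971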

31 March 2014

Martín Ferrari: Fun with Linux telephony

Continuing with my tendency to vent about stuff, today I want to talk about telephony. For a few years now, I have needed to use different VoIP providers to keep in touch with friends and family in different parts of the world. So, I have a DID in Argentina (bought through the excellent DIDWW), and another one in Ireland (which came for free when I was using BlueFace for call termination). I also have a service to handle outbound calls (FlowRoute), but it is not necessarily the only one I use, as the cost and quality of Internet calls vary wildly. I have also used several Betamax providers, the aforementioned BlueFace, tried Netelip, etc. This results in having soft-phones installed on my mobile devices and laptop, and a hardware SIP phone that I carry around, all of them having configurations for at least 3 different providers. This does not lead to hilarity. In light of this, I have wanted for a long time to set up my own SIP router to handle all of this, so I could register to a single SIP proxy that would handle all the complexity. Last Friday, the Irish DID decided to stop working. It turns out that, since I don't have my own setup, I was using that provider as a kind of hub, with their provided voicemail, and terminating the Argentinian DID there. So the damage was big. This made me spend way too many hours during the weekend trying to set up some SIP solution. And I am not pleased.

Asterisk First, I went with the old and well-known Asterisk. The default installation in Debian puts 95 configuration files in /etc/asterisk, which you are supposed to review and adjust. Yes, you've read correctly: ninety-five different configuration files. None of them has anything close to a sensible explanation of its syntax or function. Also, not a remote hint of consistency. I could not find any configuration helper in the Debian archive, just hundreds of PHP-based projects scattered around the web. All the getting-started guides I've found only walk you through the most basic tasks, but do not give you a way to have a functioning system. Needless to say, after a short time I grew tired of this, and decided to try something else.

Way too many options After this, I spent an inordinate amount of hours just trying to comprehend the differences between the gazillion different VoIP systems out there, and I am still struggling to see them. Even if I understand that X is not a PBX, I don't exactly need a PBX, and most products deliver at least some of the features I need. It seems none of them does a good job of just explaining what you can or can't do with their software. Documentation is awful in all the projects I researched. When it is there at all, it is incomplete: maybe super detailed at points, but in most cases there is just no big-picture view of the system to start understanding how things work, and how to find your way. Sadly, not even the distros seem to be able to put together a list of "recommended VoIP software for different needs".

Yate Finally, I found YATE (Yet Another Telephony Engine). It seemed promising: not too bloated, fairly extensible, and scriptable in a few languages! Sadly, after many hours, it turned out to be a fiasco. The documentation seems decent, until you realise there are many key details left out. Basic information about how a call is handled cannot be found anywhere. Using the scripting power, I was able to find out at least which variables were available, but that was not enough. I found a mailing list (with the worst archive reader I've seen in ages), where people asked the very same questions I have, and nobody has replied. In years. So here I am, stuck, not being able to tell whether a call is coming from a random host on the Internet, one of my DIDs, or one of the authenticated clients. I guess I will have to start from scratch with Yet Another (Another) Telephony Engine.

8 March 2014

Martín Ferrari: Fun with the Linux desktop

Or, "Why 2014 will NOT be the year of Linux in the desktop". So, it happens that my mum (66 yo) has been a Debian user for over a year now. With highs and lows, she manages to do what she needs; sometimes I need to intervene. Today I thought I could send her a quick email explaining how to download using BitTorrent, because of reasons. So, as I was writing, I realised that in many torrent sites, you only get a magnet link these days. No problem! Click on the magnet link, at it should work automagically. Then I remembered: it works on my computer, because I've spent a couple of hours some time ago researching how to make FireFox work with magnet links, creating a custom script, etc. I hoped that by now this must have been solved, at least in Debian unstable. Wrong again. I created a new user in my computer, launched IceWeasel/FireFox and boom: I get a dialog asking me to select a program, not from a list of desktop applications, taken from one of the gazillion sources where applications are defined, but just from any place on the file system! (At least, now you don't need to go tweaking with the hidden FireFox configuration editor). I was very angry at the brainiac at Mozilla who thought it was a great idea to ignore the host system and do their own MIMEtype handling. And then, tried Chromium to see what would happen... And I get first a scary message telling me that it is going to use the super-obscure xdg-open program to open my link, and that it could harm my computer! It was followed by another very helpful dialog telling me something like:
Unable to detect the URI-scheme of "magnet:?xt=urn:btih:diePh6iengei4quaep4shai8ahshahnae9 oolahtetheir2bohmu1eelaChui1ohdahruegh4wief6PusahDae4ho oshahjoogai7bae9shuvei9shufeX4boog8neichi3OoDee5ei9Uori c6aingairepon9gok8Mee7uRahphah4EucoopheiYin4xe4lahn0goh"
Then the real fun started... I started looking around to understand how this is supposed to work; I wanted to provide a patch! So, it turns out that if you add some values to GConf, this should work. So, try to find where that would be. Read about GConf schemas, default and mandatory values, and their 10 possible locations. Find that Azureus provides a schema; use that to create one for Transmission. Then find that, in fact, Transmission was providing defaults, which are not the same but work the same, and that they had an error there: yes, problem found! (#741069) No! It turns out that the GNOME desktop does not use that any more, and now they scan the .desktop files (who knows in which of the 100 directory trees where .desktop files are present) for MIME handlers, and the transmission-gtk.desktop file had that correctly. So why does it not work? Well, it turns out that if I used gvfs-open instead of xdg-open, it did work! The thing is, I am running XFCE here, which is GTK-based, but it is not GNOME: instead of gvfs-open, I was getting exo-open, which is its brain-dead cousin, and can't do anything but files, email and web. It is fecking 2014, and we still don't have a sensible, unified way to select preferred applications. We still have incompatible, duplicated, incomplete, competing implementations. We have FreeDesktop doing one thing to try and unify criteria, which is then ignored or mis-implemented up and down by some desktops and applications. Some days I get really angry at the Free Software world. PS: I guess I will tell mum to copy&paste the links from the browser to the torrent client, but not today. I have already lost 4 hours of sleep on this.
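For reference, this is how the FreeDesktop pieces are supposed to fit together when everything cooperates (the .desktop file location varies per system, which is part of the problem):

$ grep MimeType /usr/share/applications/transmission-gtk.desktop
MimeType=application/x-bittorrent;x-scheme-handler/magnet;
$ xdg-mime default transmission-gtk.desktop x-scheme-handler/magnet
$ xdg-open 'magnet:?xt=urn:btih:...'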

21 February 2014

Vasudev Kamath: Kontalk FLOSS alternative for Whatsapp and Co.

So Whatsapp has been acquired by Facebook, and this news is still hot; people are discussing it all over the twitterverse. So I took this opportunity to stop using Whatsapp and removed it from my phone. Possibly I could have deleted my account, but who cares. Anyway, I've been searching for a secure and FLOSS alternative to Whatsapp for quite some time now; a few days ago I found out about Telegram, but after reading the post by Tincho on Planet Debian, I decided not to use it. Recently, while going through the talk list for fossmeet.in, I found a link to Kontalk in a privacy awareness talk proposal by Praveen and thought I should give it a try. So below are my first impressions of Kontalk.
Installation and Activation Kontalk can be installed from the Play Store. For verification purposes it requests your phone number and country code, and then requests verification. This should send an SMS with a code, which should be entered in the text box given, and the app is ready to use. There is also the possibility of using a pre-existing verification code (if you got one from the developer directly; read below for details). I did see some glitches: the SMS was never delivered to my phone, even after 2 attempts and a day's wait. So I went ahead and reported a bug, and the main developer was quick to respond. After a discussion, it was noticed that the SMS was blocked by spam filters. He also mentioned it's tough to get SMS delivered to India. He was kind enough to provide me with a verification code, and I used the "Use existing code" option to enter it and get Kontalk activated. The SMS delivery inconsistency is still present for India (and maybe other nations too): some people get the code immediately, others maybe after a couple of days, and some might not at all. Upstream is already working on a possible workaround.
User Experience Now coming to the usage part: the UI is neat and clean. I won't say it's as super polished as Whatsapp or other popular apps, but it's really neat and easy to use. Some points which I like:
  • Ability to hide your presence, so others won't know whether you are online or offline (unlike Whatsapp, which advertises your "last seen").
  • Encrypted messages, with the ability to opt in or opt out.
  • Encrypted status messages! Only users with your phone number can see your status. (Cool, isn't it?)
  • Manually requesting to find contacts who already use Kontalk. Right: it doesn't read your contact list without your permission; you need to refresh to check who in your contacts is using Kontalk.
  • Attach and smiley options in the top right corner of the chat window, which allows easy access, unlike the keyboard-smiley switching of Whatsapp.
  • No automatic download of shared media content. Yes, by default it doesn't download any picture or video automatically; if you want to see something, click on it and select download.
  • Running your own server for Kontalk! Now that's something interesting for people who don't want to host their data on other people's infrastructure. Code for the server is available in the xmppserver repo.
There are also some rough edges, though I'm sure they can be improved. Some points which I noticed:
  • Contact names disappear and only the number is displayed. This happened with one of my contacts, so I'm not really sure it's a bug.
  • My friend noticed all his existing contacts suddenly vanished when he refreshed the contact list. Again, this is possibly a bug, and we are considering reporting it upstream.
  • No group chats yet. I don't see an option for that yet.
  • Attachments are at the moment restricted to pictures (and video? never tried), and uploading takes quite some time and sometimes hangs forever.
So I'm considering forwarding these to upstream and helping them by providing enough data so these can be fixed.
Technical Side All the code for the client and server, and the protocol specs, is available under GPL-v3 at the Kontalk project site. The server software is written in Python, and I guess it uses XMPP (but I've not cross-verified). The server also uses MySQL as its database. These can be hosted on your own servers, but possibly need more than that, like SMS sending options etc.
Conclusion In my view, Kontalk can become a great alternative to Whatsapp and co. from the Free Software world, and I encourage everyone to give it a try, which is the first step to help improve it.
Disclaimer: I'm not a privacy or security expert, so what I shared above is just what I noticed; experts may see something different. In any case, I welcome comments and suggestions.

4 February 2014

Martín Ferrari: Telegram

This weekend I attended FOSDEM, as I have for the past 4 years. As always, it was a great experience, even if most of the talks I attended were not so interesting. One of the great things about these events is getting together with other Free Software enthusiasts. One of them introduced me to the new kid on the block: Telegram. It is advertised as a free and secure replacement for WhatsApp, a mobile application I had been refusing to use for months, much to the anger of friends and family. People use WhatsApp because it saves them money when sending SMS-like messages to other people. Which in my mind does not make much sense, as I can use email or XMPP for the same purpose, without having to enter just another walled garden. Sadly, people nowadays use email less and less, and think that it is easier to send a Facebook message (it is not), and XMPP has become a de facto walled garden since Google betrayed its users by dropping federation. WhatsApp has the advantage of having millions of users already inside their walled garden, as happened with Facebook, MSN, or ICQ back in the day. Their success can possibly be attributed to the foolproof system it uses to discover contacts: it just sends your entire address book to their servers for matching against already registered users, which is in itself reason enough not to use it. So, when I was told of a secure, free and open alternative, I was eager to try it out. I started by opening their website to see what it was about. There, I found the first signs that something was wrong: no mention of licenses, and almost no technical detail of how the protocol works, or how security is achieved. Still, my friend wanted to talk to me, so I rushed to install it and accepted a laundry list of permissions in Android. That was a big mistake. The first thing the application did was to check my address book for contacts, without my permission or knowledge. I was greeted by being told that some of my contacts already had Telegram installed, and since then I keep getting notifications that more of my geek friends are installing it. So it is obvious that this company got all my records, breaking my privacy and security. This is enough for me to remove this piece of software from my phone, but it is not the end of the story. Checking it in more detail, I found several other problems. The supposed "secret chat" cannot possibly be secure, as there is no verification of the remote party. It "just works"; who cares if there is a man in the middle or not. Supposedly it employs a peer-to-peer connection, but I haven't verified that. On the other hand, the non-"secret" conversations are all routed through the main server. The client-to-server communication is supposedly encrypted, but it uses a home-made protocol (which everybody knows is a recipe for disaster), and the server has access to the cleartext of all your communications. Then, browsing the website, I found that this "open" and "free" offering does not even have all its source code released. In particular, the server code is not public, nor can you set up your own server. Of course, it does not have federation, so even with the server code you wouldn't be able to talk to your friends. It all depends on two men funding and maintaining the project, so when the funds run out, one can only expect that all the users will be left in the dark. I think these arguments are enough to realise that Telegram is only marginally better than WhatsApp. 
It offers encryption at the transport layer, but you still are contributing to another walled garden, you are at the mercy of a company which does not have a funding plan, and the security practises range from weak to disturbing. In closing, if you are going to compromise your privacy, give away all your contacts' information, and rely on a single company to keep in touch with people, you might as well go and use what everybody else is using. Update: Even before submitting this post, I've found that some more qualified people has already dissected Telegram and concluded that it is basically snake oil. Please see this article. and proceed to uninstall Telegram.

4 November 2013

Martín Ferrari: Living in Buenos Aires: first week

Continuing my nomadic experiment, I have just arrived in Buenos Aires. My plan is to live here for 3 months, working and enjoying the city like I never did before: I always lived in the suburbs and only came to the city to work or study. After a few days seeing my immediate family and some close friends, today I finally went out, taking advantage of a beautiful spring day. And my first stop is the National Library.

(Photo: Biblioteca Nacional, by Dan DeLuca, via Wikipedia, licensed Creative Commons Attribution 2.0 Generic.)

I came with the intention of working a bit on some projects, but arriving as a visitor I was not allowed into the most interesting places of the library (now I have registered, so next time they will not treat me as a tourist :)). It is an imposing building; I had seen it a few times before, but never entered it. I only learned today that on these grounds there used to be the presidential residence, back in Perón's days, and that it was later thoroughly demolished by the barbarians of the '55 coup. I also learned that this new building was only opened in 1992, which is surprising, as I don't remember the fact at all, and I was already 14.

22 August 2013

Martín Ferrari: Setting up my server: re-installing on an encrypted LVM

Very long post ahead (sorry for the wall of text). This is part of a series of posts on some sysadmin topics; see post 1 and post 2. I want to show you how I set up my tiny dedicated server to have encrypted partitions, and how to reinstall it from scratch. All of this without ever accessing the actual server console.

Introduction

As much as my provider may have gold standards on how to do things (they don't: there are some very bad practices in the default installation, like putting their SSH key into root's authorized_keys file), I wouldn't trust an installation done by a third party. Also, I wanted to have all my data securely encrypted. I know this is not perfect, and there are possible attacks. But I think it is a good enough barrier to deter entities without big budgets from getting my data.

I have done this twice on my servers, and today I reviewed each step while a friend was doing the same thing (with some slight differences) on his brand new server, so I think this is all mostly correct. Please tell me if you find a bug in this guide. This was done on my €12/month Kimsufi dedicated server, sold by OVH (see my previous post on why I chose it), and some things are specific to them. But you can do the same thing with any dedicated server that has a rescue netboot image.

The process is to boot into the rescue image (this is of course a weak link, as the image could have a keylogger, but we have to stop the paranoia at some point), manually partition the disk, set up encryption and LVM, and then install a Debian system with debootstrap. To be able to unlock the encrypted disks, you will have to SSH into the server after a reboot and enter the passphrase (this is done inside the initrd phase). Once unlocked, the normal boot process continues.

If anything fails, you end up with an unreachable system: it might or might not have booted, the disk might or might not be unlocked, etc. You can always go back into the rescue netboot image, but that does not allow you to see the boot process. Some providers will give you real remote console access; OVH charges you silly money for that. They used to offer a "virtual KVM", which was a bit of a kludge, but it worked: another netboot image that started a QEMU instance connected to a VNC server, so by connecting to the VNC server you would be able to interact with the emulated boot process, but with a fake BIOS and a virtual network. For some unspecified reason they have stopped offering this, but there is a workaround available. The bottom line is: if you have some kind of rescue netboot image, you can just download and run QEMU on it and do the same trick.
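For reference, that trick would look roughly like this from inside the rescue image (a sketch of mine, not OVH's exact recipe; the package name is the current Debian/Ubuntu one, and the memory size is arbitrary):

# apt-get install qemu-system-x86
# qemu-system-x86_64 -m 1024 \
    -drive file=/dev/sda,format=raw \
    -vnc :0

Then point a VNC client at port 5900 of the rescue system to watch the machine boot from its real disk.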

The gritty details

Start by netbooting into your rescue image. For OVH, you'd go to the control panel, in the Services/Netboot section, and select "rescue pro". Then reboot your server. OVH will mail you a temporary password when it finishes rebooting. Connect to it, without saving the temporary SSH key:
$ ssh -oUserKnownHostsFile=/dev/null -oStrictHostKeyChecking=no root@$IP
For the rest of the text, I am assuming you have one hard drive called /dev/sda. We start by partitioning it:
# fdisk /dev/sda
Start a new partition table with o, and then create two primary partitions: a small one for /boot at the beginning (100 to 300 MB will do), and a second one with the remaining space. Set both as type 83 (Linux), and don't forget to activate the first one, as these servers refuse to boot from the hard drive without that.
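If you'd rather script the partitioning than drive fdisk by hand, something along these lines should produce the same layout (a sketch of mine, assuming the sfdisk from a reasonably recent util-linux; the 200 MB size for /boot is an arbitrary pick within the range above):

# sfdisk /dev/sda << EOF
label: dos
/dev/sda1 : start=2048, size=409600, type=83, bootable
/dev/sda2 : type=83
EOF

With the partitions in place, create the file system for /boot, and the encrypted device: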
# mkfs.ext4 -L boot /dev/sda1
# cryptsetup -s 512 -c aes-xts-plain64 luksFormat /dev/sda2
The encryption parameters are the same as the ones used by the Debian Installer by default, so don't change them unless you really know what you are doing. You will need to type a passphrase for the encrypted device; be sure not to forget it! This passphrase can later be changed (or secondary passphrases added) with the cryptsetup tool, as sketched a few lines below. Look up the crypt device's UUID, and save it for later:
# cryptsetup luksDump /dev/sda2 | grep UUID:
UUID:           xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
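As mentioned, later passphrase management is done with cryptsetup itself; these are the standard sub-commands (both prompt for an existing passphrase first):

# cryptsetup luksAddKey /dev/sda2       # add a secondary passphrase
# cryptsetup luksChangeKey /dev/sda2    # replace an existing one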
Open the encrypted device (type the passphrase again), and set up the LVM volume group:
# cryptsetup luksOpen /dev/sda2 sda2_crypt
# pvcreate /dev/mapper/sda2_crypt
# vgcreate vg0 /dev/mapper/sda2_crypt
Create the logical volumes. This is of course a matter of personal taste, and there are many possible variations. This is my current layout; note that I put most of the "big data" in /srv.
# lvcreate -L 500m -n root vg0
# lvcreate -L 1.5g -n usr vg0
# lvcreate -L 3g -n var vg0
# lvcreate -L 1g -n home vg0
# lvcreate -L 10g -n srv vg0
# lvcreate -L 500m -n swap vg0
# lvcreate -L 100m -n tmp vg0
Some possible variations:
  • You can decide to use a ramdisk for /tmp, so instead of creating a logical volume, you would add RAMTMP=yes to /etc/default/tmpfs.
  • You can merge / and /usr in one same partition, as neither of them change much.
  • You can avoid having swap if you prefer.
  • You can put /home in /srv, and bind mount it later.
Now, create the file systems, swap space, and mount them in /target. Note that I like to use human-readable labels.
# for i in home root srv tmp usr var; do 
  mkfs.ext4 -L $i /dev/mapper/vg0-$i; done
# mkswap -L swap /dev/mapper/vg0-swap
# mkdir /target
# mount /dev/mapper/vg0-root /target
# mkdir /target/{boot,home,srv,tmp,usr,var}
# mount /dev/sda1 /target/boot
# for i in home srv tmp usr var; do
  mount /dev/mapper/vg0-$i /target/$i; done
# swapon /dev/mapper/vg0-swap
Don't forget to set the right permissions for /tmp.
# chmod 1777 /target/tmp
If you want to have /home on /srv, you'll need to do this (and then copy the bind mount line to /etc/fstab):
# mkdir /target/srv/home
# mount -o bind /target/srv/home /target/home
The disk is ready now. We will use debootstrap to install the base system. The OVH image ships it; otherwise, consult the relevant section of the Installation manual for details. It is important that at this point you check that you have a good GPG keyring for debootstrap to verify the installation source, by comparing it to a known-good one (for example, the one on your own machine):
# gpg /usr/share/keyrings/debian-archive-keyring.gpg
pub  4096R/B98321F9 2010-08-07 Squeeze Stable Release Key <debian-release@lists.debian.org>
pub  4096R/473041FA 2010-08-27 Debian Archive Automatic Signing Key (6.0/squeeze) <ftpmaster@debian.org>
pub  4096R/65FFB764 2012-05-08 Wheezy Stable Release Key <debian-release@lists.debian.org>
pub  4096R/46925553 2012-04-27 Debian Archive Automatic Signing Key (7.0/wheezy) <ftpmaster@debian.org>
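A quick way to compare them (my own suggestion, not part of the original procedure) is to checksum the keyring on the rescue system and on a machine you trust, and verify the outputs match:

# sha256sum /usr/share/keyrings/debian-archive-keyring.gpg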
Now, for the actual installation. You can use any Debian mirror; OVH has their own in the local network. In OVH's case it is critical to specify the architecture, as the rescue image is i386. I didn't notice that and had to painfully switch architectures in place (which was absolutely not possible a couple of years ago).
# debootstrap --arch amd64 wheezy /target http://debian.mirrors.ovh.net/debian
After a few minutes downloading and installing stuff, you almost have a Debian system ready to go. Since this is not D-I, we still need to tighten a few screws manually. Let's mount some needed file systems, and enter the brand new system with chroot:
# mount -o bind /dev /target/dev
# mount -t proc proc /target/proc
# mount -t sysfs sys /target/sys
# XTERM=xterm-color LANG=C.UTF-8 chroot /target /bin/bash
The most critical parts now are to correctly save the parameters for the encrypted device, and the partitions and logical volumes. You'll need the UUID saved before:
# echo 'sda2_crypt UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none luks' \
  > /etc/crypttab
Create the file systems table in /etc/fstab. Here I use labels to identify the devices:
# file system   mount point type    options             dump    pass
LABEL=root      /           ext4    errors=remount-ro   0       1
LABEL=tmp       /tmp        ext4    rw,nosuid,nodev     0       2
LABEL=var       /var        ext4    rw                  0       2
LABEL=usr       /usr        ext4    rw,nodev            0       2
LABEL=home      /home       ext4    rw,nosuid,nodev     0       2
# Alternative home in /srv:
#/srv/home      /home       auto    bind                0       0
LABEL=srv       /srv        ext4    rw,nosuid,nodev     0       2
LABEL=boot      /boot       ext4    rw,nosuid,nodev     0       2
LABEL=swap      none        swap    sw                  0       0
You can also just use the device mapper names (/dev/mapper/<volume_group>-<logical_volume>); be sure not to use the /dev/<volume_group>/<logical_volume> naming, as some initrd tools choke on it.
# file system           mount point type    options             dump    pass
/dev/mapper/vg0-root    /           ext4    errors=remount-ro   0       1
/dev/mapper/vg0-tmp     /tmp        ext4    rw,nosuid,nodev     0       2
/dev/mapper/vg0-var     /var        ext4    rw                  0       2
/dev/mapper/vg0-usr     /usr        ext4    rw,nodev            0       2
/dev/mapper/vg0-home    /home       ext4    rw,nosuid,nodev     0       2
# Alternative home in /srv:
#/srv/home              /home       auto    bind                0   0
/dev/mapper/vg0-srv     /srv        ext4    rw,nosuid,nodev     0   2
/dev/sda1               /boot       ext4    rw,nosuid,nodev     0   2
/dev/mapper/vg0-swap    none        swap    sw                  0   0
Some tools depend on /etc/mtab, which now is just a symbolic link:
# ln -sf /proc/mounts /etc/mtab
Now configure the network. You can most surely use DHCP, but you might prefer a static configuration; that's a personal choice. For DHCP, it is very straightforward:
# cat >> /etc/network/interfaces
auto eth0
iface eth0 inet dhcp
For static configuration, first find the current valid addresses and routes as obtained by DHCP:
# ip address
# ip route
And then store them:
# cat >> /etc/network/interfaces
auto eth0
iface eth0 inet static
    address AAA.BBB.CCC.DDD/24
    gateway AAA.BBB.CCC.254
    pre-up /sbin/ip addr flush dev eth0 || true
Note the pre-up command I added: it removes the configuration done by the kernel during boot (more on that later); otherwise, ifupdown will complain about existing addresses. If your provider does IPv6, add it too. For OVH, the IPv6 set-up is a bit weird, so you need to add the routes in post-up. Your default gateway is going to be your /64 prefix with the last byte replaced by ff, followed by :ff:ff:ff:ff. As you can see, that gateway is not in your network segment, so you need to add an explicit route to it. They have some information about this, but it is completely unreadable. If your IPv6 address is 2001:41D0:1234:5678::1/64, you will add:
iface eth0 inet6 static
    address 2001:41D0:1234:5678::1/64
    post-up /sbin/ip -6 route add 2001:41D0:1234:56ff:ff:ff:ff:ff dev eth0
    post-up /sbin/ip -6 route add default via 2001:41D0:1234:56ff:ff:ff:ff:ff
You probably don't want the auto-configured IPv6 addresses, so disable them via sysctl:
# cat >> /etc/sysctl.conf
# Disable IPv6 autoconf 
net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.eth0.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.eth0.accept_ra = 0
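To make these settings effective immediately, without waiting for a reboot (standard sysctl usage):

# sysctl -p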
To have a working DNS resolver, we can use the local server (OVH in this case):
# cat > /etc/resolv.conf 
search $DOMAIN
nameserver 213.186.33.99
The most important part of a new install: choose a host name (and make the system use it).
# echo $HOSTNAME > /etc/hostname
# hostname $HOSTNAME
# echo "127.0.1.1 $HOSTNAME.$DOMAIN $HOSTNAME" >> /etc/hosts
If we want to specify that the BIOS clock is set to UTC:
# echo -e '0.0 0 0.0\n0\nUTC' > /etc/adjtime
Set up your time zone:
# dpkg-reconfigure tzdata
Configure APT with your preferred mirrors. I also prevent APT from installing recommends by default.
# echo deb http://ftp2.fr.debian.org/debian wheezy main contrib non-free \
  >> /etc/apt/sources.list
# echo deb http://ftp2.fr.debian.org/debian wheezy-updates main contrib non-free \
  >> /etc/apt/sources.list
# echo deb http://security.debian.org/ wheezy/updates main contrib non-free \
  >> /etc/apt/sources.list
# echo 'APT::Install-Recommends "False";' > /etc/apt/apt.conf.d/02recommends
# apt-get update
Before installing any package, let's make sure that the initial ram disk (initrd) that is going to be created will allow us to connect. There will be no chance of using the root password during boot. Your public key is usually found in $HOME/.ssh/id_rsa.pub.
# mkdir -p /etc/initramfs-tools/root/.ssh/
# echo "$YOUR_PUB_RSA_KEY" > /etc/initramfs-tools/root/.ssh/authorized_keys
If you change this, or the host key stored at /etc/dropbear/dropbear_*_host_key, the /etc/crypttab, or any other critical piece of information for the booting process, you need to run update-initramfs -u. Now we can install the missing pieces:
# apt-get install makedev cryptsetup lvm2 ssh dropbear busybox \
  initramfs-tools locales linux-image-amd64 grub-pc kbd console-setup
During the installation you will have to choose where to install grub; I recommend installing it directly on /dev/sda. Also, the magic initrd will be created. We want to double-check that it has all the important pieces for a successful boot:
# zcat /boot/initrd.img-3.2.0-4-amd64 | cpio -t conf/conf.d/cryptroot \
  etc/lvm/lvm.conf etc/dropbear/\* root/.ssh/authorized_keys sbin/dropbear
etc/lvm/lvm.conf
etc/dropbear/dropbear_dss_host_key
etc/dropbear/dropbear_rsa_host_key
sbin/dropbear
root/.ssh/authorized_keys
conf/conf.d/cryptroot
All these files need to be there. Most critically, we need to check that the cryptroot file has the right information to access the root file system:
# zcat /boot/initrd.img-* | cpio -i --to-stdout conf/conf.d/cryptroot
target=sda2_crypt,source=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx,key=none,rootdev,lvm=vg0-root
If all that was correct, we now need to tell the kernel to configure the network as soon as possible, so we can connect to the initrd and unlock the disks. This is done by passing a command-line option through grub. It should match what was done in /etc/network/interfaces: either DHCP or static configuration. For DHCP, this line should be changed in /etc/default/grub:
GRUB_CMDLINE_LINUX="ip=:::::eth0:dhcp"
For static configuration:
GRUB_CMDLINE_LINUX="ip=MY_IP_ADDR::MY_DEFAULT_GW:MY_NETMASK::eth0:none"
It is also a good idea to disable the quiet boot and graphical boot splash, in case we need to use QEMU to fix some booting issue:
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_TERMINAL=console
And make the changes effective:
# update-grub2
Having fsck fix problems automatically can be a life-saver too:
# echo FSCKFIX=yes >> /etc/default/rcS
Get some very useful packages:
# apt-get install vim less ntpdate sudo
Create a user for yourself, and possibly make it an administrator:
# adduser tincho
# adduser tincho sudo
# adduser tincho adm
This is mostly done; exit the chroot and unmount everything.
# exit  # the chroot.
# umount /target/{dev,proc,sys,boot,home,srv,tmp,usr,var}
# umount /target
# swapoff -a
# lvchange -an /dev/mapper/vg0-*
# cryptsetup luksClose sda2_crypt
Disable the netboot option from your administration panel, reboot, and hope it all goes well. If you followed every step carefully, a few minutes later you should be able to ping your server. Use this snippet to enter the password remotely:
$ stty -echo; ssh -o UserKnownHostsFile=$HOME/.ssh/known_hosts.initramfs \
  -o BatchMode=yes root@"$HOST" 'cat > /lib/cryptsetup/passfifo'; \
  stty echo
It is very important that you close the pipe (with control-D twice) without typing enter. For my servers, I have a script that reads the passphrase from a GPG-encrypted file and pipes it directly into the remote server. That way, I only type the GPG passphrase locally:
$ cat unlock.sh 
#!/bin/sh
BASE="$(dirname "$0")"
HOST="$1"
gpg --decrypt "$BASE"/key-"$HOST".gpg   \
    ssh -o UserKnownHostsFile="$BASE"/known_hosts.initramfs -o BatchMode=yes \
        root@"$HOST" 'cat > /lib/cryptsetup/passfifo'
It might be a good idea to create a long, impossible-to-guess passphrase to use in the GPG-encrypted file, one that you can also print and store somewhere safe. See luksAddKey in the cryptsetup(8) man page. Once again, if everything went right, a few seconds later the OpenSSH server will replace the tiny dropbear, and you will be able to access your server normally (and with the real SSH host key). Hope you find this article helpful! I would love to hear your feedback.

16 August 2013

Martín Ferrari: Impostor syndrome

Do you feel like an impostor? Do you fear other people realising that you are not as good as your peers? You are not alone; many people feel the same, and it goes away eventually. I know because I've been there: it is a really awful feeling that kept me depressed and unhappy for months. But you can try to fight it! This is an excellent document from the Geek Feminism Wiki, which I found thanks to pabs@. Do yourself a favour and read it: Impostor syndrome.

13 August 2013

Martín Ferrari: DebConf 13

On Sunday I arrived at DebConf 13. It has been so much fun that I didn't have time to post anything about it! As usual, I really enjoy meeting old friends and putting faces to nicknames. Last night the Cheese and Wine party was once again great. Not everything has been partying, though. I've been discussing ideas with Enrico for recognising Debian Contributors, as he presented in his talk on Sunday. We still have to discuss further, and obviously, sit down and write a lot of code :-) Yesterday we also met with Luk, and discussed what to do with the ancient net-tools package. We had had the idea of writing compatibility wrappers using iproute2, but that turned out to be too complicated and brittle. After looking at the current state of net-tools and its reverse dependencies, we decided that the best way forward is to deprecate it: asking rdepends to migrate to iproute2 (for most of them it should be trivial), and then downgrading net-tools to optional. It won't be removed from the archive, as people will still want it, but it will not be required by any core functionality any more. In the next few days, we will be sending an email to debian-devel, and filing about 80 bugs to get rid of the dependency on net-tools, many with patches.

24 July 2013

Martín Ferrari: Of Grafton street and Hanbury lane

It finally happened: I'm at boarding gate 423, about to get on the plane that will take me out of Dublin. It seems I didn't get convinced by this article. Still, I've tried to see most of the things on that list, and a fair bit of the island. I am not sad; I was expecting to break down and make a big Greek drama out of it, but nothing like that happened. In the end, Dublin is the one place (along with Buenos Aires) I'll be coming back to often. Still, this song has been in my head for days.
(Embedded video: http://www.youtube.com/v/aMxKggsz0fM)

14 May 2013

Martín Ferrari: A new life

A week ago, I made the big step and presented my resignation letter at Google. It was not an easy decision to leave a good job to pursue a blurry plan that sounds a bit infeasible, but I feel this is what I want to do: it's a dream becoming reality. After the 31st of May, I will become self-employed, working as a freelancer while travelling around the world. I plan to live on a small budget, working with my laptop from wherever I am, instead of stressing about getting many clients to keep up an expensive lifestyle.

I've had the travelling bug for some time, always thinking about my next trip, leaving for the airport just after finishing work, coming back on Monday and going directly to the office. You end up wishing for more vacation days all the time (and I had a fair amount of them). Now, for different reasons, I want to spend some time in my old house in Nice, and in Argentina. There was no way I could do that with my current job, and that was the trigger for my decision. After that, I will come back to Ireland, just to think about where my next destination will be. I know this is going to be a great experience; we'll see how well it works!

If you think you (or your employer) might need my services, I'd be more than happy to talk! I'll be concentrating on the kind of work I've been doing at Google and before: finding creative solutions for difficult problems, be it systems administration or (systems) programming. Think of hiring an SRE for just a few hours or days.

29 April 2013

Martín Ferrari: Setting up my server: netfilter

I was going to start this series by explaining how I did the remote set-up, but instead I will share something that happened today. One of the first things you want to do when putting a server directly on the Internet is some filtering. You don't want to have an application listening on the network by mistake, so a simple netfilter firewall is a good way to ensure you are only accepting connections on ports you have explicitly allowed.

I have been a long-time user of ferm, a simple tool that reads a configuration file written in a special structured syntax and generates iptables commands from it. I have used it successfully to build very complex firewalls in previous jobs, and it has the huge benefit of keeping your firewall description readable and easy to modify by other people.

This time I thought I might go with something simpler, as I only wanted a handful of very simple netfilter rules. I looked at Shorewall, and briefly browsed a few others. But in the end I decided against them: either I would have to learn each tool's concepts about the different parts of the network, or they were slanted towards command-line invocations, so the actual configuration would end up as some files in /var/lib, totally managed by the tool. With ferm, I just need to write a very small configuration file, which reads almost like iptables commands, and that's it. In fact, the default configuration shipped by the Debian package already did 90% of what I wanted: accept incoming SSH connections and ICMP packets, and reject everything else. I took the example IPv6 configuration from /usr/share/doc/ferm/examples/ipv6.ferm and in 10 minutes it was ready:
table filter {
    chain INPUT {
        policy DROP;
        mod state state INVALID DROP;
        mod state state (ESTABLISHED RELATED) ACCEPT;
        interface lo ACCEPT;
        proto icmp ACCEPT;
        # allow IPsec
        proto udp dport 500 ACCEPT;
        proto (esp ah) ACCEPT;
        proto tcp dport ssh ACCEPT;
        proto tcp dport (http https) ACCEPT;
    }
    chain OUTPUT policy ACCEPT;
    chain FORWARD policy DROP;
}

domain ip6 table filter {
    chain INPUT {
        policy DROP;
        mod state state INVALID DROP;
        mod state state (ESTABLISHED RELATED) ACCEPT;
        interface lo ACCEPT;
        proto ipv6-icmp ACCEPT;
        proto tcp dport ssh ACCEPT;
        proto tcp dport (http https) ACCEPT;
    }
    chain OUTPUT policy ACCEPT;
    chain FORWARD policy DROP;
}
It is important to note that when doing this kind of thing on a remote machine, you want to make sure you don't get locked out by accident. My method is that, before activating any dangerous change, I drop an at job to disable the firewall a few minutes later:
# echo /etc/init.d/ferm stop | at now +10min
warning: commands will be executed using /bin/sh
job 4 at Mon Apr 29 02:47:00 2013
And if everything goes well, I just remove the job:
# atrm 4
Update: As paravoid pointed out in the comments, ferm now has (read: has had for many years, but I had never noticed) an --interactive mode which reverts the changes if you get locked out, much like the screen-resolution-change dialog in GNOME.
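Its usage would be something like this (a sketch; check ferm(1) for the details, the path is Debian's default configuration file):

# ferm --interactive /etc/ferm/ferm.conf

It loads the new rules and rolls them back automatically unless you confirm within a timeout.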
Another thing that you definitely want to do is to have some kind of protection against the almost constant influx of brute-force attacks against SSH. Apart from the obvious PermitRootLogin=no setting, there are a couple of popular methods to stop people probing random username/password combinations (I am assuming here that you actually have sensible passwords, or no passwords at all): running SSH on a non-standard port, and the great fail2ban daemon. Since I don't like non-standard stuff, I installed fail2ban, which by default will inspect /var/log/auth.log for SSH login failures and insert netfilter rules to block the offenders.

Problem is, I don't much like how fail2ban inserts rules and chains into the very tidy netfilter configuration I had just created. So I added an "action" to do things my way: only create a service-related chain and insert rules there; I call that chain from my main ferm.conf (see the snippet after the configuration below). Ferm runs early in the boot sequence, so this won't be a problem during normal operation. The only caveat is that after changing the ferm configuration, I need to restart fail2ban so it recreates the netfilter chains and rules, which were wiped by ferm. This is my configuration; note that I am ignoring the port and protocol: the whole IP is blocked for a few minutes.
# cat /etc/fail2ban/jail.local 
[DEFAULT]
action = iptables-fixed[name=%(__name__)s]
# cat /etc/fail2ban/action.d/iptables-fixed.conf
[Definition]
actionstart = iptables -N fail2ban-<name>
              iptables -I fail2ban -j fail2ban-<name>
actionstop = iptables -D fail2ban -j fail2ban-<name>
             iptables -F fail2ban-<name>
             iptables -X fail2ban-<name>
actioncheck = iptables -n -L | grep -q fail2ban-<name>
actionban = iptables -I fail2ban-<name> 1 -s <ip> -j DROP
actionunban = iptables -D fail2ban-<name> -s <ip> -j DROP
[Init]
name = default
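For completeness, the hook on the ferm side is not shown above; in my ferm.conf it would look something like this (a sketch matching the chain names used by the action: an empty fail2ban chain, populated at runtime by fail2ban, called from INPUT):

table filter {
    chain INPUT {
        # ... the rules shown earlier ...
        jump fail2ban;
    }
    chain fail2ban;
}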

28 April 2013

Martín Ferrari: Moving my stuff away from home

TL;DR version: I want to get rid of the small server running at home; here I tell you about the service I've chosen, and why I like it. In following posts, I'll explain how I set it up remotely. Disclaimer: I am in no way affiliated with the companies I mention here (except for Picasa, as I am a Google employee), and I don't get any bonuses for this post. I am only sharing this because I think it might be useful information for other people.
Being a frequent migrant means possessions are a burden. In my previous place of residence (in France), I originally intended to stay for only 6 months, and so I arrived with just a couple of suitcases; in the end that was enough for me to live on for almost 2 years. The last time, on the other hand, I was removing my stuff completely from Argentina. I emptied my house, gave away some stuff, sent some boxes to my parents' place, and carried the rest with me. That was a lot of stuff, but since the company was paying for the relocation, it was not much of a problem. Later I realised my mistake, and knowing that my time in Ireland is limited, I started trying to get rid of things I don't need. I know I will just sell or give away much of my stuff when I finally leave, but some things are not so easy to part with. The main one being my home server, which hosts this website, my VCS repositories, pictures, and many other things I need to have on the net.

This all used to be located in a home-made PC tucked away in a data centre, co-located by a friendly company. But that computer died almost 2 years ago, and so canterville became abhean, and my stuff started being hosted on my ADSL connection. It worked well for some time, but now I realised I had to revert that change. With this in mind, I set off to find a cheap place to host my stuff, with a few requirements in mind. I don't have that many photos, nor are they too big, but those requirements made it clear that most VPS offerings were not going to work for me. For some reason I fail to understand, local storage in VPS offerings is usually prohibitively expensive. This is OK for most use cases, but not for mine.

A friend of mine, with a similar use case, is a happy VPS customer. He told me his trick: he only hosts low-quality versions of the pictures on the server, and keeps the originals (and back-ups) at home. This was a great idea, but with two fatal flaws: I want to only carry around a laptop and one or two external hard drives, and I want to have back-ups that are not physically with me. I was starting to think about hosting my files in Amazon S3 or something like that, since most dedicated servers are way too expensive. But then I heard about two French companies offering dirt-cheap servers: OVH and Online.net. Both of them offered small servers for about €12 a month, cheaper than most VPS offerings! Online seems to mainly cater to the French market, and for some silly reason they charge a €50 set-up fee to customers outside of France. OVH, on the other hand, has many local branches, including an Irish one, so I went with them.

The offering is a low-cost line called Kimsufi, and the smallest one is still very decent for a personal server. Once I had paid the fee for one month, it took a while for the server to be activated (their payment system is pretty bad), but it was finally enabled about 24 hours later. Then the real fun started. On one hand, I was happy to see a wide selection of operating systems to choose from, including Debian stable and testing, and a web console with many functionalities, including some basic monitoring; on the other hand, I realised that the installed image was not pristine, the online docs are not very good, and the web application is a bit buggy and really awkward to navigate.
Having sub-par docs is not something I would usually care much about, but it made it a bit more difficult for me to understand some of the very cool functionalities their system offers (more on that in a bit). More importantly, it made it clear that I shouldn't trust their image: the procedures detailed there were not exactly best practices, and they allow themselves to log in as root on my server.

I want to describe here what I think are their most interesting features, which made it possible for me to do risky operations, like encrypting the root partition and setting up a firewall, and to fix problems that would usually require physical access. These are found in their web console: a hardware reset, and configurable netboot support with many offered images, including a rescue image based on Ubuntu and one that serves as a virtual KVM. (It is surprising that these servers don't have a serial console, but at least the kernel does not detect any.) With these in hand, I didn't have to fear being locked out of my server forever. Just set up a netboot image and hard-reboot the machine! Also, it made it very simple to install my system from scratch with debootstrap.

The virtual KVM is a very interesting trick. It is a netboot image that runs some tests and fires up a web browser. You get an email with the URL and a password to access it, and then you open a page that offers you what is basically a QEMU instance connected to a VNC server, which will boot from your real hard drive. It is super slow, but it gives you console access to your server, which can be very handy to debug booting problems, unless the issue is with the real hardware. It also offers the possibility of downloading an ISO image off the network and booting that, so it can be used to run a stock installer CD too.

In another post I'll describe how I reinstalled my server remotely, and some of the pitfalls I encountered in the process.
